High-resolution optical tactile sensors are increasingly used in robot learning settings, as they capture rich data directly related to the agent's interactions with its environment. However, the barrier to entry for research in this field is high, owing to the cost of tactile robot platforms, specialized simulation software, and a lack of simulation methods that generalize across sensors. In this letter, we extend the Tactile Gym simulator to three new optical tactile sensors (TacTip, DIGIT and DigiTac) spanning the two most popular sensor types: GelSight-style (image-shading based) and TacTip-style (marker based). We demonstrate that a single sim-to-real approach can be used with all three of these quite different sensors to achieve strong real-world performance, despite the significant differences between their real tactile images. Additionally, we lower the barrier of entry to the proposed tasks by adapting them to an inexpensive 4-DoF robot arm, further enabling the dissemination of this benchmark. We validate the extended environments on three physically-interactive tasks requiring a sense of touch: object pushing, edge following and surface following. The results of our experimental validation highlight some differences between these sensors, which may help future researchers select and customize the physical characteristics of tactile sensors for different manipulation scenarios.
Deep learning combined with high-resolution tactile sensing could lead to highly capable dexterous robots, but progress has been slow because of the specialist equipment and expertise required. The DIGIT tactile sensor offers low-cost, high-resolution touch using a GelSight-style sensor. Here we customize the DIGIT with a 3D-printed sensing surface based on the TacTip family of soft biomimetic optical tactile sensors. The resulting DIGIT-TacTip (DigiTac) enables a direct comparison between these distinct tactile sensor types. For this comparison, we introduce a tactile robot system comprising a desktop arm, mounts and 3D-printed test objects. Using tactile servo control with a PoseNet deep learning model, we compare the DIGIT, DigiTac and TacTip on edge- and surface-following tasks over 3D shapes. The three sensors perform similarly at pose prediction, but differences in their construction lead to differing performance under servo control, offering guidance for researchers selecting or innovating tactile sensors. All hardware and software for replicating this study will be openly released.
Simulation has recently become key for deep reinforcement learning to safely and efficiently acquire general and complex control policies from visual and proprioceptive inputs. Despite its direct relevance to environment interaction, tactile information is not usually considered. In this work, we demonstrate a suite of simulated environments tailored to tactile robotics and reinforcement learning. A simple and fast method of simulating optical tactile sensors is provided, in which high-resolution contact geometry is represented as depth images. Proximal Policy Optimization (PPO) is used to learn successful policies for all considered tasks. A data-driven approach enables the translation of real tactile sensor readings into corresponding simulated depth images. These policies are deployed within a real-time control loop on a physical robot to demonstrate zero-shot sim-to-real policy transfer on several physically-interactive tasks requiring a sense of touch.
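The policies above are trained with PPO. As a minimal, self-contained sketch of the idea (not the authors' implementation), PPO's clipped surrogate objective can be written in NumPy; the function name and toy numbers are ours:

```python
import numpy as np

def ppo_clip_loss(ratios, advantages, eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017).

    ratios:     pi_new(a|s) / pi_old(a|s) for a batch of transitions
    advantages: estimated advantages for the same transitions
    """
    unclipped = ratios * advantages
    clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps) * advantages
    # PPO maximizes the pessimistic (minimum) term; we return the negated mean
    return -np.mean(np.minimum(unclipped, clipped))

# A ratio inside the clip range is used as-is; one far outside is clipped,
# removing the incentive for destructively large policy updates.
loss_inside = ppo_clip_loss(np.array([1.0]), np.array([1.0]))
loss_outside = ppo_clip_loss(np.array([5.0]), np.array([1.0]))
```

The clipping is what lets PPO reuse each batch of tactile rollouts for several gradient steps without the policy drifting too far from the one that collected the data.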
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. We use an autoregressive large language model (OpenAI's text-davinci-003) to determine if proposed U.S. Congressional bills are relevant to specific public companies and provide explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. However, we test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to improve core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve. We then discuss why this could be problematic for societal-AI alignment.
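The benchmark comparison described above reduces to model accuracy versus the majority-class ("always irrelevant") baseline. A minimal sketch of that evaluation, with hypothetical labels and predictions standing in for the paper's data:

```python
from collections import Counter

def accuracy(preds, labels):
    """Fraction of predictions matching the ground-truth relevance labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def majority_baseline(labels):
    """Accuracy of always predicting the most common label (here: 'irrelevant')."""
    _, count = Counter(labels).most_common(1)[0]
    return count / len(labels)

# Hypothetical ground truth: most bills are irrelevant to a given company
labels = ["irrelevant"] * 7 + ["relevant"] * 3
preds = ["irrelevant"] * 7 + ["relevant", "relevant", "irrelevant"]

model_acc = accuracy(preds, labels)          # a model beating the baseline
baseline_acc = majority_baseline(labels)     # what text-davinci-002 failed to beat
```

On this toy data the model reaches 0.9 against a 0.7 baseline; the paper's finding is that text-davinci-003 lands above its baseline while text-davinci-002 lands below it.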
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
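Posterior collapse is concretely visible in the per-example KL term of the VAE objective. A minimal sketch (standard Gaussian-VAE formula; the function name and toy inputs are ours):

```python
import numpy as np

def kl_diag_gaussian_to_std_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), the per-example KL term
    in the Gaussian VAE ELBO. Posterior collapse corresponds to this being
    ~0 for (nearly) all inputs, i.e. q(z|x) = p(z) and z carries no
    information about x."""
    var = np.exp(log_var)
    return 0.5 * np.sum(var + mu**2 - 1.0 - log_var)

# Collapsed posterior: mean 0, unit variance -- identical to the prior
collapsed = kl_diag_gaussian_to_std_normal(np.zeros(8), np.zeros(8))
# Informative posterior: nonzero mean, so the KL is strictly positive
informative = kl_diag_gaussian_to_std_normal(np.ones(8), np.zeros(8))
```

The paper's claim is that this quantity hits zero exactly when the latents are non-identifiable in the generative model, independent of the inference machinery.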
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
In this paper we derive a PAC-Bayesian-Like error bound for a class of stochastic dynamical systems with inputs, namely, for linear time-invariant stochastic state-space models (stochastic LTI systems for short). This class of systems is widely used in control engineering and econometrics, in particular, they represent a special case of recurrent neural networks. In this paper we 1) formalize the learning problem for stochastic LTI systems with inputs, 2) derive a PAC-Bayesian-Like error bound for such systems, 3) discuss various consequences of this error bound.
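For readers unfamiliar with the model class, a stochastic LTI state-space system with inputs is conventionally written (notation ours, a standard textbook form rather than necessarily the paper's exact parametrization) as:

```latex
x_{t+1} = A x_t + B u_t + w_t, \qquad y_t = C x_t + D u_t + v_t,
```

where $x_t$ is the hidden state, $u_t$ the input, $y_t$ the output, and $w_t$, $v_t$ are i.i.d. zero-mean Gaussian noise processes; a PAC-Bayesian-like bound then controls the gap between empirical and expected prediction error under a prior and posterior over the system matrices $(A, B, C, D)$.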
We demonstrate how efficient autonomous drone swarms can be in detecting and tracking occluded targets in densely forested areas, such as lost people during search and rescue missions. Exploration and optimization of local viewing conditions, such as occlusion density and target view obliqueness, provide much faster and much more reliable results than previous, blind sampling strategies that are based on pre-defined waypoints. An adapted real-time particle swarm optimization and a new objective function are presented that are able to deal with dynamic and highly random through-foliage conditions. Synthetic aperture sensing is our fundamental sampling principle, and drone swarms are employed to approximate the optical signals of extremely wide and adaptable airborne lenses.
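The paper adapts particle swarm optimization to dynamic through-foliage conditions; as background, the textbook PSO update it builds on can be sketched as follows (all names, parameters, and the toy objective are ours, not the paper's adapted algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One textbook particle-swarm update: inertia plus attraction toward
    each particle's best-known position and the swarm's best-known position.

    pos, vel: (n_particles, dim) positions and velocities
    pbest:    per-particle best positions; gbest: swarm best position
    """
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

def sphere(x):  # toy objective standing in for a viewing-condition score
    return np.sum(x**2, axis=-1)

pos = rng.uniform(-5, 5, size=(20, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), sphere(pos)
gbest = pbest[np.argmin(pbest_val)]
init_best = pbest_val.min()

for _ in range(100):
    pos, vel = pso_step(pos, vel, pbest, gbest)
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]
```

In the paper's setting the objective is not static like this sphere function: occlusion density and target view obliqueness change as the swarm and target move, which is why a real-time, re-evaluating variant is needed.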
Generative AI has matured to a point where large-scale models can generate text that seems indistinguishable from human-written text and remarkably photorealistic images. Automatically measuring how close the distribution of generated data is to the target real data distribution is a key step in diagnosing existing models and developing better models. We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images. These scores are statistical summaries of divergence frontiers capturing two types of errors in generative modeling. We explore four approaches to statistically estimate these scores: vector quantization, non-parametric estimation, classifier-based estimation, and parametric Gaussian approximations. We provide statistical bounds for the vector quantization approach. Empirically, we find that the proposed scores paired with a range of $f$-divergences and statistical estimation methods can quantify the gaps between the distributions of human-written text and those of modern neural language models by correlating with human judgments and identifying known properties of the generated texts. We conclude the paper by demonstrating its applications to other AI domains and discussing practical recommendations.
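Under the vector-quantization approach, both sample sets are quantized to a shared discrete support and the divergence frontier is traced over mixtures of the two resulting histograms. A minimal sketch of that frontier computation (function names and the tiny histograms are ours):

```python
import numpy as np

def kl(p, q):
    """KL divergence between discrete distributions on a common support."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def divergence_frontier(p, q, num_lambdas=25):
    """Points on the divergence frontier between quantized distributions
    p (e.g. human text) and q (e.g. model text): for each mixture
    R = lam*p + (1-lam)*q, record (KL(q||R), KL(p||R)). The two
    coordinates capture the two error types (q placing mass where p has
    little, and q missing mass where p has much); MAUVE summarizes the
    curve by an area."""
    pts = []
    for lam in np.linspace(1e-3, 1.0 - 1e-3, num_lambdas):
        r = lam * p + (1.0 - lam) * q
        pts.append((kl(q, r), kl(p, r)))
    return pts

p = np.array([0.5, 0.3, 0.2])
frontier_same = divergence_frontier(p, p)                     # identical dists
frontier_diff = divergence_frontier(p, np.array([0.2, 0.3, 0.5]))
```

Identical distributions collapse the frontier to the origin, while mismatched ones push it outward, which is what the summary score quantifies.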
Previous work has shown the potential of deep learning to predict renal obstruction using kidney ultrasound images. However, these image-based classifiers have been trained with the goal of single-visit inference in mind. We compare methods from video action recognition (i.e. convolutional pooling, LSTM, TSM) to adapt single-visit convolutional models to handle multi-visit inference. We demonstrate that incorporating images from a patient's past hospital visits provides only a small benefit for the prediction of obstructive hydronephrosis. Thus, while including prior ultrasounds is beneficial, prediction based on the latest ultrasound alone is sufficient for patient risk stratification.
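Of the adaptation methods compared, convolutional pooling is the simplest: after the CNN backbone, per-frame (here, per-visit) features are aggregated by a pooling operation before classification. A minimal sketch of that aggregation step, with all names and toy features ours:

```python
import numpy as np

def pool_visit_embeddings(embeddings, mode="mean"):
    """Aggregate per-visit ultrasound embeddings into one patient-level
    vector, as in convolutional-pooling adaptations of single-frame CNNs.

    embeddings: (n_visits, dim) array of single-visit CNN features
    """
    if mode == "mean":
        return embeddings.mean(axis=0)
    if mode == "max":
        return embeddings.max(axis=0)
    raise ValueError(f"unknown pooling mode: {mode}")

# Three hypothetical visits, each reduced to a 2-d feature vector
visits = np.array([[0.0, 2.0], [2.0, 0.0], [1.0, 1.0]])
pooled_mean = pool_visit_embeddings(visits, "mean")
pooled_max = pool_visit_embeddings(visits, "max")
```

A convenient property of such pooling is that it handles a variable number of past visits without architectural changes, unlike sequence models with fixed input lengths.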